Efficient estimation of parameters in marginals in semiparametric multivariate models
Recent literature on semiparametric copula models has focused on the situation where the marginals are specified nonparametrically and the copula function is given a parametric form. For example, this setup is used in Chen, Fan and Tsyrennikov (2006) [Efficient Estimation of Semiparametric Multivariate Copula Models, JASA], who focus on efficient estimation of copula parameters. We consider the reverse situation, where the marginals are specified parametrically and the copula function is modelled nonparametrically. This setting is no less relevant in applications. We use the method of sieves for efficient estimation of the parameters in the marginals, derive the estimator's asymptotic distribution and show that it is semiparametrically efficient. Simulations suggest that the sieve MLE can be up to 40% more efficient than the QMLE, depending on the strength of dependence between the marginals. An application using insurance company loss and expense data demonstrates the empirical relevance of this setting.
Efficient estimation of parameters in marginals in semiparametric multivariate models
We consider a general multivariate model where the univariate marginal distributions are known up to a common parameter vector, and we are interested in estimating that vector without assuming anything about the joint distribution except for the marginals. If we assume independence between the marginals and maximize the resulting quasi-likelihood, we obtain a consistent but inefficient estimate. If we assume a parametric copula (other than independence), we obtain a full MLE, which is efficient but only under correct copula specification, and badly biased if the copula is misspecified. Instead we propose a sieve MLE estimator which improves over the QMLE but does not suffer the drawbacks of the full MLE. We model the unknown part of the joint distribution using the Bernstein-Kantorovich polynomial copula and assess the resulting improvement over the QMLE and over the misspecified FMLE in terms of relative efficiency and robustness. We derive the asymptotic distribution of the new estimator and show that it reaches the semiparametric efficiency bound. Simulations suggest that the sieve MLE can be almost as efficient as the FMLE relative to the QMLE, provided there is enough dependence between the marginals. An application using insurance company loss and expense data demonstrates the empirical relevance of the estimator.
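For intuition, a degree-m Bernstein copula density can be written as a mixture of products of Beta(i+1, m−i) densities governed by an m×m weight matrix. A minimal sketch of that representation (function and variable names are ours, and the Kantorovich smoothing used in the paper is omitted):

```python
import numpy as np
from scipy.stats import beta

def bernstein_copula_density(u, v, w):
    """Bernstein copula density at (u, v) for an (m, m) weight matrix w
    with nonnegative entries summing to one (the uniform-marginal
    constraints on row/column sums are not enforced here)."""
    m = w.shape[0]
    i = np.arange(m)
    bu = beta.pdf(u, i + 1, m - i)  # Beta(i+1, m-i) basis at u, shape (m,)
    bv = beta.pdf(v, i + 1, m - i)
    return bu @ w @ bv

# Uniform weights reproduce the independence copula, whose density is 1:
# a uniform mixture of Beta(k, m+1-k) densities is the Uniform(0,1) density.
m = 4
w_indep = np.full((m, m), 1.0 / m**2)
print(bernstein_copula_density(0.3, 0.7, w_indep))
```

In the sieve estimator, the degree m grows with the sample size while the weights are estimated jointly with the marginal parameters.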
Asset price dynamics with small world interactions under heterogeneous beliefs
We propose a simple model of a financial market populated with heterogeneous agents. The market is represented by a network with nodes symbolizing the agents and edges standing for connections between them, thus embodying local interactions in the market. By local interactions we mean any kind of interplay between the decisions of the agents that is unaffected by the market mechanism and unrelated to the physical distance between the agents. Using a rewiring procedure we restructure the network from a regular lattice to a random graph by varying the probability that agents switch from one trading strategy to another. We study how the network structure influences the asset price dynamics. The results show that for some intermediate values of the switching probability, corresponding to a small-world network, the price dynamics become reminiscent of those of real markets, while for the boundary values of the probability the dynamics lack some typical features of real financial markets.
Keywords: local interactions, networks, small world, heterogeneous beliefs, price dynamics, bifurcations, chaos
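The rewiring procedure is, in spirit, the Watts–Strogatz construction: start from a ring lattice and rewire each edge with probability p, interpolating between a regular lattice (p = 0) and a random graph (p = 1). A simplified sketch (names are ours; duplicate-edge corner cases are handled only crudely):

```python
import random

def rewire_ring_lattice(n, k, p, seed=0):
    """Ring lattice of n nodes, each linked to its k nearest neighbours on
    one side; each edge's far endpoint is rewired with probability p."""
    rng = random.Random(seed)
    edges = {(i, (i + j) % n) for i in range(n) for j in range(1, k + 1)}
    rewired = set()
    for a, b in sorted(edges):
        if rng.random() < p:
            c = rng.randrange(n)
            while c == a or (a, c) in rewired:  # avoid self-loops and repeats
                c = rng.randrange(n)
            rewired.add((a, c))
        else:
            rewired.add((a, b))
    return rewired

print(len(rewire_ring_lattice(10, 2, 0.0)))  # p = 0 keeps all 20 lattice edges
```

Intermediate p yields short average path lengths with high clustering, the small-world regime the abstract refers to.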
Out-of-sample comparison of copula specifications in multivariate density forecasts
We introduce a statistical test for comparing the predictive accuracy of competing copula specifications in multivariate density forecasts, based on the Kullback-Leibler Information Criterion (KLIC). The test is valid under general conditions: in particular it allows for parameter estimation uncertainty and for the copulas to be nested or nonnested. Monte Carlo simulations demonstrate that the proposed test has satisfactory size and power properties in finite samples. Applying the test to daily exchange rate returns of several major currencies against the US dollar we find that the Student's t copula is favored over Gaussian, Gumbel and Clayton copulas. This suggests that these exchange rate returns are characterized by symmetric tail dependence.
Keywords: copula-based density forecast; semiparametric statistics; out-of-sample forecast evaluation; Kullback-Leibler Information Criterion; empirical copula
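In spirit, the test compares average out-of-sample log predictive scores of the two copula specifications, Diebold–Mariano style. A deliberately simplified sketch: it assumes serially uncorrelated score differences and ignores the estimation-uncertainty and nestedness corrections the paper provides (all names are ours):

```python
import numpy as np
from scipy.stats import norm

def klic_score_test(logscores_a, logscores_b):
    """t statistic and two-sided p-value for equal predictive accuracy,
    based on the mean difference in out-of-sample log scores."""
    d = np.asarray(logscores_a) - np.asarray(logscores_b)
    t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
    p = 2 * (1 - norm.cdf(abs(t)))
    return t, p

# toy example: specification A persistently scores higher than B
t, p = klic_score_test([1.0, 0.5, 1.5, 1.0], [0.2, 0.1, 0.4, 0.3])
```

A positive, significant t favors specification A; in practice a HAC variance estimator replaces the naive one above.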
Measuring costs of carbon sequestration in northwest Russia
Global warming, which may cause environmental catastrophes, dramatic economic losses and, in the extreme case, the extinction of the human race, is driven by anthropogenic emissions of greenhouse gases (carbon dioxide, methane, nitrous oxide and others) into the atmosphere. It has been shown that forests can efficiently absorb carbon from the atmosphere and reduce the concentration of greenhouse gases, mitigating climate change. In this study we explore environmentally oriented forest management options for carbon mitigation. We concentrate on Northwest Russia, the St. Petersburg region in particular. This research is part of a larger project comparing carbon dynamics in two ecosystems: the U.S. Pacific Northwest and Northwest Russia. We use the STANDCARB model to simulate forest growth and account for sequestered carbon, which allows us to explore the effect of different management regimes on carbon storage and economic value. We evaluate 140 regimes with different combinations of rotation length, regeneration type, and intensity and frequency of thinnings. We employ Data Envelopment Analysis to identify the set of carbon- and profit-efficient management regimes. The set of efficient points forms a production possibility frontier that shows the tradeoff between stored carbon and monetary value. We then measure the marginal costs of carbon sequestration along the production possibility frontier. The results suggest that the marginal costs of carbon sequestration exhibit diminishing returns and are negatively correlated with the discount rate. At a 4% discount rate the marginal costs vary from 0.08 to 4.71 USD
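The efficiency step amounts, in spirit, to keeping the management regimes that are not dominated in the (carbon, profit) plane; the frontier is traced through those points. A brute-force sketch with toy numbers (names and data are illustrative; the study's DEA formulation is more general):

```python
def pareto_frontier(points):
    """Return the (carbon, profit) pairs not dominated by any other pair."""
    efficient = []
    for i, (c1, p1) in enumerate(points):
        dominated = any(
            c2 >= c1 and p2 >= p1 and (c2 > c1 or p2 > p1)
            for j, (c2, p2) in enumerate(points) if j != i
        )
        if not dominated:
            efficient.append((c1, p1))
    return efficient

# toy regimes as (stored carbon, net present value) pairs
regimes = [(1, 5), (2, 4), (3, 1), (2, 5)]
print(pareto_frontier(regimes))  # (1, 5) and (2, 4) are dominated by (2, 5)
```

The marginal cost of sequestration along the frontier is then the profit forgone per extra unit of carbon between adjacent efficient points.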
Synthetic Controls with Multiple Outcomes: Estimating the Effects of Non-Pharmaceutical Interventions in the COVID-19 Pandemic
We propose a generalization of the synthetic control method to a
multiple-outcome framework, which improves the reliability of treatment effect
estimation. This is done by supplementing the conventional pre-treatment time
dimension with the extra dimension of related outcomes in computing the
synthetic control weights. Our generalization can be particularly useful for
studies evaluating the effect of a treatment on multiple outcome variables. To
illustrate our method, we estimate the effects of non-pharmaceutical
interventions (NPIs) on various outcomes in Sweden in the first 3 quarters of
2020. Our results suggest that if Sweden had implemented stricter NPIs like the
other European countries by March, then there would have been about 70% fewer
cumulative COVID-19 infection cases and deaths by July, and 20% fewer deaths
from all causes in early May, whereas the impacts of the NPIs were relatively
mild on the labor market and economic outcomes.
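The core computation is the synthetic control weight vector: nonnegative weights summing to one that make a convex combination of donor units match the treated unit, where here the rows being matched stack pre-treatment periods of several related outcomes rather than one. A minimal sketch under those simplex constraints (names are ours, and the paper's scaling across outcomes is omitted):

```python
import numpy as np
from scipy.optimize import minimize

def synthetic_weights(x_treated, X_donors):
    """Simplex-constrained least squares: rows of X_donors (and entries of
    x_treated) stack pre-treatment periods across multiple outcomes."""
    J = X_donors.shape[1]
    loss = lambda w: np.sum((x_treated - X_donors @ w) ** 2)
    res = minimize(
        loss,
        np.full(J, 1.0 / J),            # start from equal weights
        bounds=[(0.0, 1.0)] * J,        # nonnegativity
        constraints=({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},),
    )
    return res.x

# toy data: the treated unit is an exact convex combination of two donors
X_donors = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 3.0]])
x_treated = X_donors @ np.array([0.3, 0.7])
w = synthetic_weights(x_treated, X_donors)
```

The counterfactual path of each outcome is then the weighted combination of the donors' post-treatment paths.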
Wright meets Markowitz: How standard portfolio theory changes when assets are technologies following experience curves
We consider how to optimally allocate investments in a portfolio of competing
technologies using the standard mean-variance framework of portfolio theory. We
assume that technologies follow the empirically observed relationship known as
Wright's law, also called a "learning curve" or "experience curve", which
postulates that costs drop as cumulative production increases. This introduces
a positive feedback between cost and investment that complicates the portfolio
problem, leading to multiple local optima, and causing a trade-off between
concentrating investments in one project to spur rapid progress vs.
diversifying over many projects to hedge against failure. We study the
two-technology case and characterize the optimal diversification in terms of
progress rates, variability, initial costs, initial experience, risk aversion,
discount rate and total demand. The efficient frontier framework is used to
visualize technology portfolios and show how feedback results in nonlinear
distortions of the feasible set. For the two-period case, in which learning and
uncertainty interact with discounting, we compare different scenarios and find
that the discount rate plays a critical role.
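Wright's law itself is a one-line power law in cumulative production. A minimal sketch (names are ours; a progress ratio of 0.8, i.e. a 20% cost drop per doubling, corresponds to the exponent shown):

```python
import math

def wrights_law_cost(c0, x0, x_added, alpha):
    """Unit cost after cumulative production grows from x0 to x0 + x_added,
    following cost = c0 * (x / x0) ** (-alpha)."""
    return c0 * ((x0 + x_added) / x0) ** (-alpha)

alpha = -math.log2(0.8)  # progress ratio 0.8: costs fall 20% per doubling
print(wrights_law_cost(100.0, 1.0, 1.0, alpha))  # one doubling: 100 -> 80
```

Because investment raises cumulative production and thereby lowers future cost, the portfolio objective is no longer concave, which is the source of the multiple local optima discussed above.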
Partial Likelihood-Based Scoring Rules for Evaluating Density Forecasts in Tails
We propose new scoring rules based on partial likelihood for assessing the relative out-of-sample predictive accuracy of competing density forecasts over a specific region of interest, such as the left tail in financial risk management. By construction, existing scoring rules based on weighted likelihood or censored normal likelihood favor density forecasts with more probability mass in the given region, rendering predictive accuracy tests biased towards such densities. Our novel partial likelihood-based scoring rules do not suffer from this problem, as illustrated by means of Monte Carlo simulations and an empirical application to daily S&P 500 index returns.
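One partial-likelihood-style rule in this spirit is a conditional likelihood score: inside the region of interest the forecast density is rewarded only after renormalizing it to that region, so piling extra probability mass into the region brings no automatic advantage. A hedged sketch for a left-tail region with a Gaussian forecast (names and the specific form are illustrative, not necessarily the paper's exact rules):

```python
import numpy as np
from scipy.stats import norm

def conditional_likelihood_score(y, dist, threshold):
    """Score a density forecast `dist` (a scipy.stats distribution) on the
    left-tail region y <= threshold; observations outside score zero."""
    if y <= threshold:
        return np.log(dist.pdf(y) / dist.cdf(threshold))
    return 0.0

# a standard-normal forecast evaluated on a tail observation
print(conditional_likelihood_score(-1.0, norm, 0.0))
```

Average score differences across forecasts can then be compared with a Diebold–Mariano-type test, as in the density-forecast comparisons above.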
Efficiency of continuous double auctions under individual evolutionary learning with full or limited information
In this paper we explore how specific aspects of market transparency
and agents’ behavior affect the efficiency of the market outcome. In particular,
we are interested in whether learning behavior with and without information
about actions of other participants improves market efficiency. We consider
a simple market for a homogeneous good populated by buyers and sellers.
The valuations of the buyers and the costs of the sellers are given exogenously.
Agents are involved in consecutive trading sessions, which are organized as
a continuous double auction with an order book. Using Individual Evolutionary
Learning, agents submit price bids and offers, trying to learn the most profitable
strategy by looking at their realized and counterfactual or “foregone” payoffs.
We find that learning outcomes heavily depend on information treatments.
Under full information about actions of others, agents’ orders tend to be
similar, while under limited information agents tend to submit their valuations/costs. This behavioral outcome results in higher price volatility in the latter treatment. We also find that learning improves allocative efficiency when compared to the outcomes of Zero-Intelligence traders.
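For intuition, the trading institution can be reduced to a toy continuous double auction step: an incoming bid trades against the best standing ask when it crosses it, and otherwise rests in the book (the experiments' actual market, and the IEL learning of bids and offers, are of course richer; names are ours):

```python
import heapq

def cda_session(orders):
    """Process (side, price) orders in sequence; return trade prices.
    A bid crossing the best ask trades at the standing ask, and vice versa."""
    bids, asks, trades = [], [], []   # bids stored negated for a max-heap
    for side, price in orders:
        if side == 'bid':
            if asks and price >= asks[0]:
                trades.append(heapq.heappop(asks))
            else:
                heapq.heappush(bids, -price)
        else:
            if bids and -bids[0] >= price:
                trades.append(-heapq.heappop(bids))
            else:
                heapq.heappush(asks, price)
    return trades

print(cda_session([('ask', 10), ('bid', 12), ('bid', 5), ('ask', 6)]))  # [10]
```

Allocative efficiency is then the realized gains from trade relative to the maximum attainable given the exogenous valuations and costs.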